
[None][chore] Add Dynamo configs to TRTLLM CI - Agg #13171

Open
brb-nv wants to merge 2 commits into NVIDIA:main from brb-nv:user/brb/mirror-dynamo-configs-in-trtllm-agg

Conversation

@brb-nv
Collaborator

@brb-nv brb-nv commented Apr 18, 2026

Description

This PR adds Dynamo configs to TRTLLM CI to catch issues early. This PR covers the aggregated (agg) configs.

Test Coverage

N/A

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Summary by CodeRabbit

  • New Features

    • Added performance sanity test support for DGX H200 (8 GPUs), GB200 multi-GPU/multi-node, and B200 platforms.
    • Introduced test configurations for new AI models: Qwen3 32B FP8, Qwen3 235B A22B FP8, DeepSeek-V3.2 FP4, K25 Thinking FP4, and GPT-OSS-120B FP4.
  • Tests

    • Extended model path dictionary with Qwen3 32B FP8 support.
    • Added multiple new performance sanity test entries across test suites for model/hardware variant combinations.
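The model-registry change summarized above can be sketched as follows. MODEL_PATH_DICT and the qwen3_32b_fp8 entry come from this PR's walkthrough; the surrounding entries and the get_model_path helper are hypothetical, added only to illustrate how such a registry is typically consumed:

```python
# Sketch of the registry extension described above. MODEL_PATH_DICT is the
# dict name from tests/integration/defs/perf/test_perf_sanity.py; the
# neighboring entries are placeholders, not the file's actual contents.
MODEL_PATH_DICT = {
    # ... existing entries ...
    "qwen3_32b_fp8": "Qwen3/Qwen3-32B-FP8",  # entry added by this PR
}


def get_model_path(model_key: str) -> str:
    """Resolve a perf-sanity model key to its Hugging Face model path.

    Hypothetical helper for illustration; the real test harness may
    index MODEL_PATH_DICT directly.
    """
    try:
        return MODEL_PATH_DICT[model_key]
    except KeyError:
        raise ValueError(f"Unknown perf-sanity model key: {model_key}")
```

Tests that reference `qwen3_32b_fp8` in the new l0_*.yml lists would resolve the checkpoint path through this mapping.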

@brb-nv brb-nv requested review from a team as code owners April 18, 2026 00:02
@brb-nv brb-nv requested review from EmmaQiaoCh and mlefeb01 April 18, 2026 00:02
Comment thread jenkins/L0_Test.groovy
"DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-2": ["auto:dgx-b200-flex", "l0_b200_multi_gpus_perf_sanity", 2, 4, 8, 1, true],
"DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-3": ["auto:dgx-b200-flex", "l0_b200_multi_gpus_perf_sanity", 3, 4, 8, 1, true],
"DGX_B200-8_GPUs-PyTorch-PerfSanity-Post-Merge-4": ["auto:dgx-b200-flex", "l0_b200_multi_gpus_perf_sanity", 4, 4, 8, 1, true],
"DGX_H200-8_GPUs-PyTorch-PerfSanity-Post-Merge-1": ["dgx-h200-x8", "l0_dgx_h200_perf_sanity", 1, 1, 8, 1, true],
Collaborator Author


Question for infra reviewer: This is adding a new stage. Is there anything else we need to do?

@coderabbitai
Contributor

coderabbitai bot commented Apr 18, 2026

📝 Walkthrough

This PR adds support for new performance sanity test configurations and models. Changes include a Jenkins stage definition for DGX H200 GPUs, registration of the Qwen3 32B FP8 model, new test entries across multiple test lists, and performance sanity YAML configurations for model deployments (Qwen3, GPT-OSS-120B, DeepSeek-V3.2, K25-thinking) on Blackwell and Hopper GPUs with specific tensor parallelism and throughput constraints.

Changes

Cohort / File(s) Summary
CI Pipeline Configuration
jenkins/L0_Test.groovy
Added a new Slurm perf-sanity post-merge stage entry for DGX H200 with 8 GPUs, mapping to platform dgx-h200-x8 and YAML config l0_dgx_h200_perf_sanity.
Model Registry
tests/integration/defs/perf/test_perf_sanity.py
Extended MODEL_PATH_DICT with new model key qwen3_32b_fp8 mapped to Hugging Face model path "Qwen3/Qwen3-32B-FP8".
Test List Configurations
tests/integration/test_lists/test-db/l0_*.yml
Added new perf sanity test entries across multiple test suites: l0_b200_multi_gpus_perf_sanity.yml (1 entry for Qwen3 variant), l0_gb200_multi_gpus_perf_sanity.yml (2 entries for GPT-OSS), l0_gb200_multi_nodes_perf_sanity_node2_gpu8.yml (1 entry for DeepSeek multi-node), and created new l0_dgx_h200_perf_sanity.yml with 2 performance sanity tests for K25-thinking and Qwen3 variants.
Perf Sanity YAML Configs
tests/scripts/perf-sanity/aggregated/dynamo_*.yaml
Added 6 new performance sanity configuration files defining model metadata, hardware layout, server configs (tensor/expert/pipeline parallelism, batch sizes, throughput limits), TRT-LLM attention backends, KV cache settings, and client workload definitions for: Qwen3 235B/32B FP8 (H200), GPT-OSS-120B FP4 (B200), DeepSeek-V3.2 FP4 (GB200 multi-node), and K25-thinking FP4 (B200).
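As a rough illustration of the sections listed above, a perf-sanity config in this family might look like the sketch below. All key names are illustrative guesses derived from the walkthrough's description (model metadata, hardware layout, server config, client workloads), not the actual schema of the dynamo_*.yaml files:

```yaml
# Illustrative sketch only; the real files may use different keys.
model:
  name: qwen3_32b_fp8
  hf_path: Qwen3/Qwen3-32B-FP8
hardware:
  gpu: H200
  num_gpus: 8
server:
  tensor_parallel_size: 8
  pipeline_parallel_size: 1
  max_batch_size: 256
  attention_backend: TRTLLM
  kv_cache_free_gpu_mem_fraction: 0.9
clients:
  - concurrency: 64
    isl: 1024        # input sequence length
    osl: 1024        # output sequence length
```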

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Suggested reviewers

  • mlefeb01
  • EmmaQiaoCh
  • dc3671
  • ruodil
  • ZhanruiSunCh
🚥 Pre-merge checks — ✅ 3 passed

  • Title check — ✅ Passed. The title accurately summarizes the main change: adding Dynamo configs to TRTLLM CI, specifically aggregated configs.
  • Description check — ✅ Passed. The PR description follows the template structure with Description and Test Coverage sections filled in, though Test Coverage is marked N/A and the description explanation is minimal.
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/defs/perf/test_perf_sanity.py (1)

1-1: ⚠️ Potential issue | 🟠 Major

Update the copyright year in the modified Python source header.

Line 1 still says 2022-2025 even though this file is modified in 2026.

✅ Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines, "All TensorRT-LLM source files must contain an NVIDIA copyright header with the year of latest meaningful modification."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/defs/perf/test_perf_sanity.py` at line 1, Update the source
header line that currently reads "2022-2025" to include the latest modification
year (e.g., change to "2022-2026") so the file header reflects the 2026
modification; locate the top-of-file copyright header in test_perf_sanity.py and
replace the year range accordingly.
🧹 Nitpick comments (1)
tests/scripts/perf-sanity/aggregated/dynamo_qwen3_235b_a22b_fp8_hopper.yaml (1)

15-15: Harden remote-code execution in CI configs.

trust_remote_code: true is understandable for some model stacks, but it weakens CI safety. Please document why it is required here and prefer pinned/immutable model revisions when remote code is enabled.

Also applies to: 47-47

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/scripts/perf-sanity/aggregated/dynamo_qwen3_235b_a22b_fp8_hopper.yaml`
at line 15, The CI config enables unsafe remote-code execution via the YAML key
trust_remote_code: true; update this to either trust_remote_code: false or, if
remote code is required for this model, add an inline comment and a pinned
immutable model revision (e.g., set model_revision to a specific commit/sha or
exact model version) and document in the file why remote code is necessary for
the model stack so reviewers can audit the change; ensure the same change is
applied where trust_remote_code appears elsewhere in this file.
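A minimal sketch of the suggested mitigation, assuming the config supports a revision-pinning key (both the key names and the model path below are illustrative, not taken from the actual files):

```yaml
# Illustrative sketch of the mitigation suggested above; the exact keys
# (trust_remote_code, model_revision) must match the schema actually
# consumed by the perf-sanity harness.
model:
  hf_path: Qwen3/Qwen3-32B-FP8
  # Remote code is required because this model stack ships custom
  # modeling code; document the reason inline like this.
  trust_remote_code: true
  # Pin to an immutable revision so the executed remote code cannot
  # change out from under CI.
  model_revision: <commit-sha>
```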

ℹ️ Review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 51dc1d5b-dcac-462b-a4c0-524e8ccaedca

📥 Commits

Reviewing files that changed from the base of the PR and between 813d877 and 12e9200.

📒 Files selected for processing (11)
  • jenkins/L0_Test.groovy
  • tests/integration/defs/perf/test_perf_sanity.py
  • tests/integration/test_lists/test-db/l0_b200_multi_gpus_perf_sanity.yml
  • tests/integration/test_lists/test-db/l0_dgx_h200_perf_sanity.yml
  • tests/integration/test_lists/test-db/l0_gb200_multi_gpus_perf_sanity.yml
  • tests/integration/test_lists/test-db/l0_gb200_multi_nodes_perf_sanity_node2_gpu8.yml
  • tests/scripts/perf-sanity/aggregated/dynamo_deepseek_v32_fp4_2_nodes_grace_blackwell.yaml
  • tests/scripts/perf-sanity/aggregated/dynamo_gpt_oss_120b_fp4_blackwell.yaml
  • tests/scripts/perf-sanity/aggregated/dynamo_k25_thinking_fp4_blackwell.yaml
  • tests/scripts/perf-sanity/aggregated/dynamo_qwen3_235b_a22b_fp8_hopper.yaml
  • tests/scripts/perf-sanity/aggregated/dynamo_qwen3_32b_fp8_hopper.yaml

Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
@brb-nv brb-nv force-pushed the user/brb/mirror-dynamo-configs-in-trtllm-agg branch from 12e9200 to afecdd0 Compare April 18, 2026 00:10
@brb-nv
Collaborator Author

brb-nv commented Apr 18, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #44078 [ run ] triggered by Bot. Commit: afecdd0 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44078 [ run ] completed with state SUCCESS. Commit: afecdd0
/LLM/main/L0_MergeRequest_PR pipeline #34508 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@brb-nv
Collaborator Author

brb-nv commented Apr 18, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #44127 [ run ] triggered by Bot. Commit: afecdd0 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44127 [ run ] completed with state SUCCESS. Commit: afecdd0
/LLM/main/L0_MergeRequest_PR pipeline #34554 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@brb-nv
Collaborator Author

brb-nv commented Apr 19, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #44140 [ run ] triggered by Bot. Commit: afecdd0 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44140 [ run ] completed with state SUCCESS. Commit: afecdd0
/LLM/main/L0_MergeRequest_PR pipeline #34567 completed with status: 'SUCCESS'

CI Report

Link to invocation

@brb-nv
Collaborator Author

brb-nv commented Apr 20, 2026

/bot help

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@brb-nv
Collaborator Author

brb-nv commented Apr 20, 2026

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge, GB200-16_GPUs-4_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE2-GPU8-GEN1-NODE2-GPU8-Post-Merge, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge"

@tensorrt-cicd
Collaborator

PR_Github #44530 [ run ] triggered by Bot. Commit: 7aeb9ce Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44530 [ run ] completed with state FAILURE. Commit: 7aeb9ce
/LLM/main/L0_MergeRequest_PR pipeline #34928 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@brb-nv
Collaborator Author

brb-nv commented Apr 20, 2026

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-1, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-2, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-3, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-4, GB200-16_GPUs-4_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE2-GPU8-GEN1-NODE2-GPU8-Post-Merge-1, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-1, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-2, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-3, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-4, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-5, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-6, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-7"

@tensorrt-cicd
Collaborator

PR_Github #44536 [ run ] triggered by Bot. Commit: 7aeb9ce Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44536 [ run ] completed with state SUCCESS. Commit: 7aeb9ce
/LLM/main/L0_MergeRequest_PR pipeline #34931 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@brb-nv
Collaborator Author

brb-nv commented Apr 21, 2026

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-1, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-2, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-3, GB200-8_GPUs-2_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU1-GEN1-NODE1-GPU4-Post-Merge-4, GB200-16_GPUs-4_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE2-GPU8-GEN1-NODE2-GPU8-Post-Merge-1, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-1, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-2, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-3, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-4, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-5, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-6, GB200-36_GPUs-9_Nodes-PyTorch-Disagg-PerfSanity-CTX1-NODE1-GPU4-GEN1-NODE8-GPU32-Post-Merge-7" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #44608 [ run ] triggered by Bot. Commit: 7aeb9ce Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #44608 [ run ] completed with state SUCCESS. Commit: 7aeb9ce
/LLM/main/L0_MergeRequest_PR pipeline #34991 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation


Labels

None yet

Projects

None yet


4 participants